Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System
In this paper we present the results of an investigation of the importance of verbs in a deep learning QA system trained on the SQuAD dataset. We show that the main verbs in questions have little influence on the decisions made by the system: in over 90% of the investigated cases, swapping verbs for their antonyms did not change the system's decision. We track this phenomenon down to the internals of the network, analyzing the self-attention mechanism and the values contained in the hidden layers of the RNN. Finally, we identify the characteristics of the SQuAD dataset as the source of the problem. Our work relates to the recently popular topic of adversarial examples in NLP, combined with investigation of deep network structure.
Comment: Accepted to the Analyzing and interpreting neural networks for NLP workshop at EMNLP 201
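The probe the abstract describes can be sketched in a few lines: replace a question's main verb with its antonym and check whether the answer changes. This is a minimal illustration with a toy lexical-overlap "QA model" standing in for the paper's trained system; `ANTONYMS`, `answer_question`, and `swap_main_verb` are illustrative names, not the authors' code.

```python
# Hypothetical antonym table; a real probe would use a lexical resource.
ANTONYMS = {"increase": "decrease", "win": "lose", "open": "close"}

def answer_question(question, context):
    """Toy extractive 'QA model': return the context sentence sharing the
    most words with the question. Like the systems the paper studies, it
    can end up ignoring the main verb in favour of content-word overlap."""
    q_words = set(question.lower().rstrip("?").split())
    best, best_overlap = "", -1
    for sentence in context.split("."):
        overlap = len(q_words & set(sentence.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sentence.strip(), overlap
    return best

def swap_main_verb(question, verb):
    """Replace the given verb with its antonym, if one is listed."""
    return question.replace(verb, ANTONYMS.get(verb, verb))

context = "Exports increase in summer. Prices fall in winter."
original = answer_question("When do exports increase?", context)
perturbed = answer_question(swap_main_verb("When do exports increase?", "increase"), context)
decision_changed = original != perturbed
```

Here the perturbed question ("When do exports decrease?") still retrieves the same sentence, reproducing in miniature the verb-insensitivity the paper reports.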
How much should you ask? On the question structure in QA systems
Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems prove that it is possible to ask questions in a natural-language manner. However, users are still accustomed to query-like systems, where they type in keywords to search for an answer. In this study we validate which parts of questions are essential for obtaining a valid answer. To do so, we take advantage of LIME, a framework that explains a prediction by local approximation. We find that grammar and natural language are disregarded by QA systems: a state-of-the-art model can answer properly even if 'asked' with only a few words that have high LIME coefficients. To the best of our knowledge, this is the first time a QA model has been explained with LIME.
Comment: Accepted to the Analyzing and interpreting neural networks for NLP workshop at EMNLP 201
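The core idea of a LIME-style explanation is to perturb the input (here, by dropping question words), query the black-box model on each perturbation, and attribute importance to each word from the model's responses. The sketch below is a simplified stand-in: it enumerates word-presence masks exhaustively and scores each word by the mean prediction difference between masks that keep it and masks that drop it, whereas real LIME fits a weighted linear model on sampled perturbations. `toy_qa_confidence` is a hypothetical black box, not the authors' model.

```python
import itertools

def word_importances(words, black_box):
    """Crude LIME-style attribution: for each word, the coefficient is the
    mean black-box score over perturbations that keep the word, minus the
    mean over perturbations that drop it."""
    masks = list(itertools.product([0, 1], repeat=len(words)))
    scores = {m: black_box([w for w, keep in zip(words, m) if keep]) for m in masks}
    importances = {}
    for i, word in enumerate(words):
        kept = [scores[m] for m in masks if m[i] == 1]
        dropped = [scores[m] for m in masks if m[i] == 0]
        importances[word] = sum(kept) / len(kept) - sum(dropped) / len(dropped)
    return importances

# Toy black box: 'answer confidence' depends only on content words,
# mimicking the paper's finding that grammar words barely matter.
def toy_qa_confidence(words):
    return sum(1.0 for w in words if w in {"capital", "France"})

imp = word_importances(["What", "is", "the", "capital", "of", "France"], toy_qa_confidence)
```

Keeping only the words with the highest coefficients ("capital", "France") leaves the toy model's confidence unchanged, which mirrors the abstract's claim that a few high-coefficient words suffice.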
Electrically Connecting Bacteria to Nanoparticles for Biotechnological Applications
Combining abiotic photosensitisers such as semiconductor fluorescence-emitting nanoparticles, quantum dots (QDs), with non-photosynthetic bacteria 'in vivo' presents an intriguing approach to the design of artificial photosynthetic organisms and solar-driven fuel production. Shewanella oneidensis MR-1 (MR-1) is a versatile bacterium with respect to respiration, metabolism and biocatalysis, and is a very promising organism for artificial photosynthesis. The bacteria's synthetic and catalytic abilities, together with their longevity, provide a promising system for bacterial biohydrogen production. MR-1's hydrogenases are present in the periplasmic space, and it follows that QDs or their electrons will need to enter the periplasm via the Mtr pathway, which is responsible for the extracellular electron-transfer ability of MR-1. First, various QDs were tested for nanotoxicity and then for interaction with MR-1 by fluorescence and electron microscopy. CdTe/CdS/TGA, CdTe/CdS/Cysteamine, commercial negatively charged CdTe and CuIn2S/ZnS/PMAL QDs were examined, and it was found that the latter two showed no toxicity for MR-1, as evaluated by a colony-forming-units method and a fluorescence viability assay. Only the commercial negatively charged CdTe QDs showed good interaction with MR-1. Detailed investigation of this interaction by transmission electron microscopy showed that the QDs were located both inside the cell and close to the membrane. Subsequently, the photoreduction power of the QDs was evaluated by methyl viologen (MV) assays with different sacrificial electron donors. It was indeed found that the QDs have a reduction potential sufficiently low to perform MV photoreduction. As assessed by gas chromatography, CdTe/CdS/TGA and negatively charged CdTe QDs supported hydrogen evolution in Shewanella putrefaciens CN-32. These results establish a proof of concept for photosynthetic production of biohydrogen by CN-32. Further research should be invested in the use of biocompatible sacrificial electron donors and the development of appropriate bacterial mutants that would help to understand the QD-assisted hydrogen evolution in this bacterium.
Applying SoftTriple Loss for Supervised Language Model Fine Tuning
We introduce a new loss function, TripleEntropy, to improve classification performance when fine-tuning general-knowledge pre-trained language models; it is based on cross-entropy and SoftTriple loss. This loss function improves on the robust RoBERTa baseline model fine-tuned with cross-entropy loss by about 0.02%-2.29%. Thorough tests on popular datasets indicate a steady gain. The fewer samples in the training dataset, the higher the gain: 0.78% for small-sized datasets, 0.86% for medium-sized, 0.20% for large, and 0.04% for extra-large.
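The combination the abstract describes can be sketched as a weighted sum of cross-entropy and a SoftTriple term, where each class is represented by several centers and the class similarity is a softmax-weighted mixture of the sample's similarities to those centers. This is a minimal pure-Python sketch, not the authors' implementation; the mixing weight `beta` and the hyperparameters `gamma`, `lam`, `delta` are assumed values for illustration.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def soft_triple_loss(x, centers, label, gamma=10.0, lam=20.0, delta=0.01):
    """SoftTriple loss with K centers per class: per-class similarity is a
    softmax-weighted mixture of similarities to that class's centers, then
    a scaled softmax over classes with a margin delta on the true class."""
    sims = []
    for class_centers in centers:
        raw = [dot(x, c) for c in class_centers]
        weights = softmax([gamma * r for r in raw])
        sims.append(sum(w * r for w, r in zip(weights, raw)))
    logits = [lam * (s - delta) if c == label else lam * s
              for c, s in enumerate(sims)]
    return -math.log(softmax(logits)[label])

def cross_entropy_loss(logits, label):
    return -math.log(softmax(logits)[label])

def triple_entropy_loss(x, class_logits, centers, label, beta=0.5):
    """TripleEntropy as sketched here: cross-entropy plus a weighted
    SoftTriple term (beta is an assumed mixing weight)."""
    return cross_entropy_loss(class_logits, label) + beta * soft_triple_loss(x, centers, label)

# Toy example: a 2-D embedding, 2 classes with 2 centers each.
x = [1.0, 0.0]
centers = [[[1.0, 0.0], [0.9, 0.1]], [[0.0, 1.0], [0.1, 0.9]]]
loss = triple_entropy_loss(x, class_logits=[2.0, 0.5], centers=centers, label=0)
```

In a real fine-tuning setup the embedding `x` would be the language model's pooled representation and the centers would be learned parameters.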
Funeral Directors: Hidden Victims or Copers? And Research Portfolio
The focus of this research is the quantitative description of funeral directors in terms of general health and mood, attitudes towards death, and work and coping strategies. Within this investigation, comparisons were made between those who are and those who are not involved in disaster work. The results show that the whole sample had low anxiety and concern regarding death. Symptoms of psychological distress were more evident in those not involved in disaster work, whereas those who were involved in disaster work showed lower levels of psychological distress, more effective coping, and work practices in keeping with the recommendations of previous researchers.
A Strong Baseline for Fashion Retrieval with Person Re-Identification Models
Fashion retrieval is the challenging task of finding an exact match for fashion items contained within an image. Difficulties arise from the fine-grained nature of clothing items and very large intra-class and inter-class variance. Additionally, query and source images for the task usually come from different domains: street photos and catalogue photos, respectively. Due to these differences, a significant gap in quality, lighting, contrast, background clutter and item presentation exists between the domains. As a result, fashion retrieval is an active field of research both in academia and in industry. Inspired by recent advancements in Person Re-Identification (ReID) research, we adapt leading ReID models for use in fashion retrieval tasks. We introduce a simple baseline model for fashion retrieval that significantly outperforms previous state-of-the-art results despite a much simpler architecture. We conduct in-depth experiments on the Street2Shop and DeepFashion datasets and validate our results. Finally, we propose a cross-domain (cross-dataset) evaluation method to test the robustness of fashion retrieval models.
Comment: 33 pages, 14 figures
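At inference time, ReID-style retrieval of the kind adapted here reduces to ranking a gallery of catalogue embeddings by similarity to a street-photo query embedding. The sketch below illustrates that ranking step with cosine similarity over hypothetical pre-computed embeddings; the item names and vectors are invented for illustration, and a real system would produce the embeddings with a trained ReID backbone.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def retrieve(query_emb, gallery, top_k=3):
    """Rank gallery (catalogue) embeddings by cosine similarity to the
    query (street-photo) embedding and return the top-k item ids."""
    ranked = sorted(gallery.items(), key=lambda kv: cosine(query_emb, kv[1]), reverse=True)
    return [item_id for item_id, _ in ranked[:top_k]]

# Hypothetical pre-computed catalogue embeddings.
gallery = {
    "red_dress":  [0.9, 0.1, 0.0],
    "blue_jeans": [0.1, 0.9, 0.0],
    "red_skirt":  [0.8, 0.2, 0.1],
}
hits = retrieve([1.0, 0.05, 0.0], gallery, top_k=2)
```

The cross-dataset evaluation the abstract proposes amounts to computing such rankings with queries from one dataset against the gallery of another.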
Multi-modal Embedding Fusion-based Recommender
Recommendation systems have lately become popular globally, with primary use cases in online interaction systems and a significant focus on e-commerce platforms. We have developed a machine learning-based recommendation platform which can be easily applied to almost any domain of items and/or actions. Contrary to existing recommendation systems, our platform natively supports multiple types of interaction data with multiple modalities of metadata. This is achieved through multi-modal fusion of various data representations. We have deployed the platform in multiple e-commerce stores of different kinds, e.g. food and beverages, shoes, fashion items, and telecom operators. Here, we present our system and its flexibility and performance. We also show benchmark results on open datasets that significantly outperform state-of-the-art prior work.
Comment: 7 pages, 8 figures
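The simplest form of the multi-modal fusion mentioned above is late fusion by concatenation: per-modality embeddings (e.g. interactions, text, image) are joined into one item vector, which is then scored against a user vector. This is a minimal sketch of that idea, not the paper's architecture; all vectors and dimensions are hypothetical.

```python
def fuse(modalities):
    """Late fusion by concatenation: join the per-modality embedding
    vectors into a single item representation."""
    fused = []
    for vec in modalities:
        fused.extend(vec)
    return fused

def score(user_vec, item_vec):
    """Relevance score as a dot product between user and item vectors."""
    return sum(u * i for u, i in zip(user_vec, item_vec))

# Hypothetical modality embeddings: interactions, text metadata, image.
item = fuse([[0.2, 0.8], [0.5, 0.1], [0.0, 0.9]])
user = [0.3, 0.7, 0.4, 0.0, 0.1, 0.6]
s = score(user, item)
```

More elaborate fusion (learned projections, attention over modalities) would replace `fuse`, but the downstream scoring and ranking remain the same.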
A Deep Learning Approach for Automatic Detection of Qualitative Features of Lecturing
Artificial Intelligence in higher education opens new possibilities for improving the lecturing process, such as enriching didactic materials, helping to assess students' work, or even providing directions to teachers on how to enhance their lectures. We follow this research path, and in this work we explore how an academic lecture can be assessed automatically from quantitative features. First, we prepare a set of qualitative features based on teaching practices, and then annotate a dataset of academic lecture videos collected for this purpose. We then show how these features can be detected automatically using machine learning and computer vision techniques. Our results show the potential usefulness of our work.
Comment: 10 pages, 9 figures
Hematopoietic stem cell mobilization with the reversible CXCR4 receptor inhibitor plerixafor (AMD3100)—Polish compassionate use experience
Recent developments in the field of targeted therapy have led to the discovery of a new drug, plerixafor, a specific inhibitor of the CXCR4 receptor. Plerixafor acts in concert with granulocyte colony-stimulating factor (G-CSF) to increase the number of stem cells circulating in the peripheral blood (PB). Therefore, it has been applied in the field of hematopoietic stem cell mobilization. We retrospectively analyzed data regarding stem cell mobilization with plerixafor in a cohort of 61 patients suffering from multiple myeloma (N = 23), non-Hodgkin's lymphoma (N = 20), or Hodgkin's lymphoma (N = 18). At least one previous mobilization attempt had failed in 83.6% of these patients, whereas 16.4% were predicted to be poor mobilizers. The median number of CD34+ cells in the PB after the first administration of plerixafor was 22/μL (range of 0–121). In total, 85.2% of the patients proceeded to cell collection, and a median of two aphereses (range of 0–4) was performed. A minimum of 2.0 × 10⁶ CD34+ cells per kilogram of the patient's body weight (cells/kg b.w.) was collected from 65.6% of patients, and the median number of cells collected was 2.67 × 10⁶ CD34+ cells/kg b.w. (range of 0–8.0). Of the patients, 55.7% had already undergone autologous stem cell transplantation, and the median times to neutrophil and platelet reconstitution were 12 and 14 days, respectively. No cases of late graft failure were observed. We identified a diagnosis of non-Hodgkin's lymphoma and previous radiotherapy as independent factors contributing to failure of mobilization. The current report demonstrates the satisfactory efficacy of plerixafor plus G-CSF for stem cell mobilization in heavily pre-treated poor or predicted poor mobilizers.